On a Markov Game with One-Sided Incomplete Information
Authors
Abstract
Similar Resources
On a Markov Game with One-Sided Incomplete Information
We apply the average cost optimality equation to zero-sum Markov games by considering a simple game with one-sided incomplete information that generalizes an example of Aumann and Maschler (1995). We determine the value and identify the optimal strategies for a range of parameters.
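For reference, in a zero-sum Markov game with state space $S$, action sets $A$ and $B$, stage payoff $r$, and transition kernel $p$, the average cost optimality equation takes the following standard form (a textbook formulation assumed here for illustration, not a display quoted from the paper):

\[ \rho + h(s) \;=\; \operatorname{val}_{x \in \Delta(A),\, y \in \Delta(B)} \Big[\, r(s,x,y) \;+\; \sum_{s' \in S} p(s' \mid s,x,y)\, h(s') \,\Big], \qquad s \in S, \]

where $\rho$ is the (state-independent) value of the average-payoff game, $h$ is a relative value (bias) function, and $\operatorname{val}$ denotes the value of the one-shot zero-sum game in brackets.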
On a Markov Game with Incomplete Information
We consider an example of a Markov game with lack of information on one side, that was first introduced by Renault (2002). We compute both the value and optimal strategies for a range of parameter values. ∗MEDS Department, Kellogg School of Management, Northwestern University, and Département Finance et Economie, HEC, 1, rue de la Libération, 78 351 Jouy-en-Josas, France. e-mail: j-horner@kello...
Two-sided Matching with Incomplete Information
Stability in a two-sided matching model with non-transferable utility and with incomplete information is investigated. Each agent has interdependent preferences that depend on his own type and on the possibly unknown types of agents on the other side of the market. In a one-sided incomplete information model in which workers’ types are private information, a firm joins a worker in a block to ...
On a Continuous-Time Game with Incomplete Information
For zero-sum two-player continuous-time games with integral payoff and incomplete information on one side, it is shown that the optimal strategy of the informed player can be computed through an auxiliary optimization problem over certain martingale measures. The optimal martingale measures are also characterized and computed explicitly in several examples.
Learning in Markov Games with Incomplete Information
The Markov game (also called stochastic game (Filar & Vrieze 1997)) has been adopted as a theoretical framework for multiagent reinforcement learning (Littman 1994). In a Markov game, there are n agents, each facing a Markov decision process (MDP). All agents’ MDPs are correlated through their reward functions and the state transition function. As Markov decision process provides a theoretical ...
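As a point of reference, a Markov (stochastic) game of the kind described above is commonly written as a tuple (generic textbook notation, not taken from the cited papers):

\[ \Gamma \;=\; \big\langle\, N,\; S,\; \{A^i\}_{i \in N},\; P,\; \{r^i\}_{i \in N} \,\big\rangle, \qquad P : S \times A^1 \times \cdots \times A^n \to \Delta(S), \qquad r^i : S \times A^1 \times \cdots \times A^n \to \mathbb{R}, \]

where $N$ is the set of $n$ agents, $S$ the state space, $A^i$ agent $i$'s action set, $P$ the joint state-transition kernel, and $r^i$ agent $i$'s reward function. Fixing the other agents' strategies reduces agent $i$'s problem to an ordinary MDP, which is the sense in which the agents' MDPs are coupled through $P$ and the $r^i$.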
Journal
Journal title: SSRN Electronic Journal
Year: 2009
ISSN: 1556-5068
DOI: 10.2139/ssrn.1499219